2.
medRxiv ; 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38562711

ABSTRACT

Background: Health research that significantly impacts global clinical practice and policy is often published in high-impact factor (IF) medical journals. These outlets play a pivotal role in the worldwide dissemination of novel medical knowledge. However, researchers identifying as women and those affiliated with institutions in low- and middle-income countries (LMIC) have been largely underrepresented in high-IF journals across multiple fields of medicine. To evaluate disparities in gender and geographical representation among authors who have published in any of five top general medical journals, we conducted scientometric analyses using a large-scale dataset extracted from the New England Journal of Medicine (NEJM), Journal of the American Medical Association (JAMA), The British Medical Journal (BMJ), The Lancet, and Nature Medicine. Methods: Author metadata from all articles published in the selected journals between 2007 and 2022 were collected using the DimensionsAI platform. The Genderize.io API was then utilized to infer each author's likely gender based on their extracted first name. The World Bank country classification was used to map countries associated with researcher affiliations to the LMIC or the high-income country (HIC) category. We characterized the overall gender and country income category representation across the medical journals. In addition, we computed article-level diversity metrics and contrasted their distributions across the journals. Findings: We studied 151,536 authors across 49,764 articles published in five top medical journals over a span of 15 years. On average, approximately one-third (33.1%) of the authors of a given paper were inferred to be women; this result was consistent across the journals we studied. Further, 86.6% of the teams were exclusively composed of HIC authors; in contrast, only 3.9% were exclusively composed of LMIC authors.
The probability of serving as the first or last author was significantly higher if the author was inferred to be a man (18.1% vs 16.8%, P < .01) or was affiliated with an institution in an HIC (16.9% vs 15.5%, P < .01). Our primary finding reveals that having a diverse team promotes further diversity, both within the same dimension (i.e., gender or geography) and across dimensions. Notably, papers with at least one woman among the authors were more likely to also involve at least two LMIC authors (11.7% versus 10.4% at baseline, P < .001; based on inferred gender); conversely, papers with at least one LMIC author were more likely to also involve at least two women (49.4% versus 37.6%, P < .001; based on inferred gender). Conclusion: We provide a scientometric framework to assess authorship diversity. Our research suggests that the inclusiveness of high-impact medical journals is limited in terms of both gender and geography. We advocate for medical journals to adopt policies and practices that promote greater diversity and collaborative research. In addition, our findings offer a first step towards understanding the composition of teams conducting medical research globally and an opportunity for individual authors to reflect on their own collaborative research practices and possibilities to cultivate more diverse partnerships in their work.
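The article-level diversity metrics described above can be sketched as a small tally over per-author labels. This is an illustrative sketch only: the field names, labels, and function are assumptions, not taken from the paper's codebase, and the gender/income labels are assumed to come from upstream inference (e.g., Genderize.io and the World Bank classification).

```python
from collections import Counter

def team_diversity(authors):
    """Summarize article-level diversity from per-author labels.

    `authors` is a list of dicts with an inferred 'gender'
    ("woman"/"man") and a World Bank income category 'income'
    ("LMIC"/"HIC"). Field names here are illustrative.
    """
    n = len(authors)
    genders = Counter(a["gender"] for a in authors)
    incomes = Counter(a["income"] for a in authors)
    return {
        "share_women": genders["woman"] / n,
        "all_hic": incomes["HIC"] == n,    # team exclusively HIC-affiliated
        "all_lmic": incomes["LMIC"] == n,  # team exclusively LMIC-affiliated
        "has_woman": genders["woman"] > 0,
        "lmic_authors": incomes["LMIC"],
    }

team = [
    {"gender": "woman", "income": "HIC"},
    {"gender": "man", "income": "LMIC"},
    {"gender": "man", "income": "HIC"},
]
print(team_diversity(team))  # one-third women, mixed HIC/LMIC team
```

Aggregating these per-article summaries over a corpus yields exactly the kinds of quantities the abstract reports (share of women per paper, share of all-HIC teams, co-occurrence of women and LMIC authors).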

3.
J Biomed Inform ; 152: 104631, 2024 04.
Article in English | MEDLINE | ID: mdl-38548006

ABSTRACT

Selection bias can arise through many aspects of a study, including recruitment, inclusion/exclusion criteria, input-level exclusion and outcome-level exclusion, and often reflects the underrepresentation of populations historically disadvantaged in medical research. The effects of selection bias can be further amplified when non-representative samples are used in artificial intelligence (AI) and machine learning (ML) applications to construct clinical algorithms. Building on the "Data Cards" initiative for transparency in AI research, we advocate for the addition of a participant flow diagram for AI studies detailing relevant sociodemographic and/or clinical characteristics of excluded participants across study phases, with the goal of identifying potential algorithmic biases before their clinical implementation. We include both a model for this flow diagram as well as a brief case study explaining how it could be implemented in practice. Through standardized reporting of participant flow diagrams, we aim to better identify potential inequities embedded in AI applications, facilitating more reliable and equitable clinical algorithms.
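The proposed participant flow diagram amounts to tracking, at each study phase, how many candidates were excluded and the characteristics of those excluded. A minimal sketch of such a tally follows; phase names, attributes, and the class itself are illustrative assumptions, not the "Data Cards" specification or the authors' model.

```python
from collections import Counter, defaultdict

class ParticipantFlow:
    """Minimal tally for an AI-study participant flow diagram.

    Tracks, per study phase, counts of excluded participants broken
    down by a chosen sociodemographic or clinical attribute. Phase
    names and attribute values are supplied by the study team.
    """
    def __init__(self):
        self.excluded = defaultdict(Counter)  # phase -> attribute counts

    def exclude(self, phase, attribute_value, count=1):
        self.excluded[phase][attribute_value] += count

    def report(self):
        """Return a plain-dict summary suitable for a flow diagram."""
        return {phase: dict(c) for phase, c in self.excluded.items()}

flow = ParticipantFlow()
flow.exclude("inclusion criteria", "age 80+", 40)
flow.exclude("missing outcome", "age 80+", 12)
flow.exclude("missing outcome", "age <80", 5)
print(flow.report())
```

A skew such as the one above (older patients disproportionately excluded at multiple phases) is precisely the kind of pattern the standardized diagram is meant to surface before clinical deployment.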


Subject(s)
Biomedical Research , Health Equity , Humans , Artificial Intelligence , Algorithms , Machine Learning
4.
PLOS Digit Health ; 3(1): e0000417, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38236824

ABSTRACT

The study provides a comprehensive review of OpenAI's Generative Pre-trained Transformer 4 (GPT-4) technical report, with an emphasis on applications in high-risk settings like healthcare. A diverse team, including experts in artificial intelligence (AI), natural language processing, public health, law, policy, social science, healthcare research, and bioethics, analyzed the report against established peer review guidelines. The GPT-4 report shows a significant commitment to transparent AI research, particularly in creating a systems card for risk assessment and mitigation. However, it reveals limitations such as restricted access to training data, inadequate confidence and uncertainty estimations, and concerns over privacy and intellectual property rights. Key strengths identified include the considerable time and economic investment in transparent AI research and the creation of a comprehensive systems card. On the other hand, the lack of clarity in training processes and data raises concerns about encoded biases and interests in GPT-4. The report also lacks confidence and uncertainty estimations, crucial in high-risk areas like healthcare, and fails to address potential privacy and intellectual property issues. Furthermore, this study emphasizes the need for diverse, global involvement in developing and evaluating large language models (LLMs) to ensure broad societal benefits and mitigate risks. The paper presents recommendations such as improving data transparency, developing accountability frameworks, establishing confidence standards for LLM outputs in high-risk settings, and enhancing industry research review processes. It concludes that while GPT-4's report is a step towards open discussions on LLMs, more extensive interdisciplinary reviews are essential for addressing bias, harm, and risk concerns, especially in high-risk domains. 
The review aims to expand the understanding of LLMs in general and highlights the need for new forms of reflection on how LLMs are reviewed, the data required for effective evaluation, and how to address critical issues such as bias and risk.

5.
BMJ Open Respir Res ; 10(1)2023 12 18.
Article in English | MEDLINE | ID: mdl-38114240

ABSTRACT

OBJECTIVES: The National Early Warning Score 2 (NEWS2) is validated for predicting acute deterioration; however, the binary grading of inspired oxygen fraction (FiO2) may limit performance. We evaluated the incorporation of FiO2 as a weighted categorical variable on NEWS2 prediction of patient deterioration. SETTING: Two hospitals at a single medical centre, Guy's and St Thomas' NHS Foundation Trust. DESIGN: Retrospective cohort of all ward admissions with a viral respiratory infection (SARS-CoV-2/influenza). PARTICIPANTS: 3704 adult ward admissions were analysed between 01 January 2017 and 31 December 2021. METHODS: The NEWS-FiO2 score transformed FiO2 into a weighted categorical variable, from 0 to 3 points, substituting the original 0/2 points. The primary outcome was a composite of cardiac arrest, unplanned critical care admission or death within 24 hours of the observation. Sensitivity, positive predictive value (PPV), number needed to evaluate (NNE) and area under the receiver operating characteristic curve (AUROC) were calculated. Failure analysis for the time from trigger to outcome was compared by log-rank test. RESULTS: The mean age was 60.4±19.4 years, 52.6% were men, with a median Charlson Comorbidity Index of 0 (IQR 3). The primary outcome occurred in 493 (13.3%) patients, and the weighted FiO2 score was strongly associated with the outcome (p<0.001). In patients receiving supplemental oxygen, 78.5% of scores were reclassified correctly and the AUROC was 0.81 (95% CI 0.81 to 0.81) for NEWS-FiO2 versus 0.77 (95% CI 0.77 to 0.77) for NEWS2. This improvement persisted in the whole cohort, with a significantly higher failure rate for NEWS-FiO2 (p<0.001). At the 5-point threshold, the PPV increased by 22.0% (NNE 6.7) for only a 3.9% decrease in sensitivity. CONCLUSION: Transforming FiO2 into a weighted categorical variable improved NEWS2 prediction for patient deterioration, significantly improving the PPV.
Prospective external validation is required before institutional implementation.
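The threshold metrics reported above follow from standard confusion-matrix definitions, with NNE = 1/PPV. The sketch below shows those definitions, plus a hypothetical 0-3-point FiO2 banding: the cut-points in `fio2_points` are placeholders for illustration, since the paper's actual bands are not given in the abstract.

```python
def triage_metrics(tp, fp, fn):
    """Sensitivity, positive predictive value (PPV), and number needed
    to evaluate (NNE = 1/PPV) at a given alerting threshold."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return {"sensitivity": sensitivity, "ppv": ppv, "nne": 1 / ppv}

def fio2_points(fio2):
    """Map FiO2 (fraction, 0.21 = room air) to a 0-3 point weighted
    category. Cut-points here are illustrative placeholders, NOT the
    bands used in the study."""
    if fio2 <= 0.21:
        return 0
    if fio2 <= 0.40:
        return 1
    if fio2 <= 0.60:
        return 2
    return 3

# Example: 30 true alerts, 170 false alerts, 10 missed events
print(triage_metrics(tp=30, fp=170, fn=10))
```

Note how NNE falls as PPV rises: a higher PPV directly means fewer patients evaluated per true deterioration event detected, which is the practical gain the abstract reports.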


Subject(s)
Early Warning Score , Male , Adult , Humans , Middle Aged , Aged , Female , Retrospective Studies , Prospective Studies , Oxygen , SARS-CoV-2
6.
BMJ Health Care Inform ; 30(1)2023 Nov 24.
Article in English | MEDLINE | ID: mdl-38007224

ABSTRACT

OBJECTIVES: Digital health inequality, observed as differential utilisation of digital tools between population groups, has not previously been quantified in the National Health Service (NHS). Deployment of universal digital health interventions, including a national smartphone app and online primary care services, allows measurement of digital inequality across a nation. We aimed to measure population factors associated with digital utilisation across 6356 primary care providers serving the population of England. METHODS: We used multivariable regression to test association of population and provider characteristics (including patient demographics, socioeconomic deprivation, disease burden, prescribing burden, geography and healthcare provider resource) with activation of two independent digital services during 2021/2022. RESULTS: We find a significant adjusted association between increased population deprivation and reduced digital utilisation across both interventions. Multivariable regression coefficients for the most deprived quintiles correspond to 4.27 million patients across England where deprivation is associated with non-activation of the NHS App. CONCLUSION: Results are concerning for technologically driven widening of healthcare inequalities. Targeted incentives for digital adoption are necessary to prevent digital disparity from becoming disparity in health outcomes.


Subject(s)
Health Status Disparities , State Medicine , Humans , England/epidemiology , Healthcare Disparities
7.
medRxiv ; 2023 Oct 03.
Article in English | MEDLINE | ID: mdl-37873343

ABSTRACT

Pulse oximeters measure peripheral arterial oxygen saturation (SpO2) noninvasively, while the gold standard (SaO2) involves arterial blood gas measurement. There are known racial and ethnic disparities in their performance. BOLD is a new comprehensive dataset that aims to underscore the importance of addressing biases in pulse oximetry accuracy, which disproportionately affect darker-skinned patients. The dataset was created by harmonizing three Electronic Health Record databases (MIMIC-III, MIMIC-IV, eICU-CRD) comprising Intensive Care Unit stays of US patients. Paired SpO2 and SaO2 measurements were time-aligned and combined with various other sociodemographic and clinical parameters to provide a detailed representation of each patient. BOLD includes 49,099 paired measurements, within a 5-minute window and with oxygen saturation levels between 70-100%. Minority racial and ethnic groups account for ∼25% of the data - a proportion seldom achieved in previous studies. The codebase is publicly available. Given the prevalent use of pulse oximeters in the hospital and at home, we hope that BOLD will be leveraged to develop debiasing algorithms that can result in more equitable healthcare solutions.
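The time alignment of SpO2 and SaO2 readings within a 5-minute window can be sketched as a nearest-neighbour pairing over timestamps. This is an assumption-laden sketch, not the BOLD codebase: the data layout (lists of timestamp-value tuples) and function name are invented for illustration.

```python
from datetime import datetime, timedelta

def pair_measurements(spo2, sao2, window=timedelta(minutes=5)):
    """Pair each SaO2 (arterial blood gas) value with the closest SpO2
    (pulse oximetry) reading taken within `window`.

    `spo2` and `sao2` are lists of (timestamp, value) tuples; returns
    a list of (spo2_value, sao2_value) pairs.
    """
    pairs = []
    for t_sao2, v_sao2 in sao2:
        # Candidate SpO2 readings within the window, keyed by distance
        candidates = [(abs(t - t_sao2), v)
                      for t, v in spo2 if abs(t - t_sao2) <= window]
        if candidates:
            _, v_spo2 = min(candidates)  # closest in time
            pairs.append((v_spo2, v_sao2))
    return pairs

t0 = datetime(2021, 1, 1, 12, 0)
spo2 = [(t0, 97), (t0 + timedelta(minutes=20), 94)]
sao2 = [(t0 + timedelta(minutes=3), 95)]
print(pair_measurements(spo2, sao2))  # [(97, 95)]
```

Comparing the paired values by patient-reported race or skin tone is then the starting point for quantifying the occult-hypoxemia bias the dataset is designed to expose.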

8.
iScience ; 26(10): 107924, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37817930

ABSTRACT

Increasing awareness of health disparities has led to proposals for a pay-for-equity scheme. Implementing such proposals requires systematic methods of collecting and reporting health outcomes for targeted demographics over time. This lays the foundation for a shift from quality improvement projects (QIPs) to equality improvement projects (EQIPs) that could evaluate adherence to standards and progress toward health equity. We performed a scoping review on EQIPs to inform a new framework for quality improvement through a health equity lens. Forty studies implemented an intervention after identifying a disparity compared to 149 others which merely identified group differences. Most evaluated race-based differences and were conducted at the institutional level, with representation in both the inpatient and outpatient settings. EQIPs that improved equity leveraged multidisciplinary expertise, healthcare staff education, and developed tools to track health outcomes continuously. EQIPs can help bridge the inequality gap and form part of an incentivized systematic equality improvement framework.

9.
Lancet Digit Health ; 5(11): e831-e839, 2023 11.
Article in English | MEDLINE | ID: mdl-37890905

ABSTRACT

The growing recognition of differences in health outcomes across populations has led to a slow but increasing shift towards transparent reporting of patient outcomes. In addition, pay-for-equity initiatives, such as those proposed by the Centers for Medicare & Medicaid Services, will require the reporting of health outcomes across subgroups over time. Dashboards offer one means of visualising data in the health-care context that can highlight essential disparities in clinical outcomes, guide targeted quality-improvement efforts, and ultimately improve health equity. In this Viewpoint, we evaluate all studies that have reported the successful development of a disparity dashboard and share the data collected and unintended consequences reported. We propose a framework for systematic equality improvement through incentivisation of the collecting and reporting of health data and through implementation of reward systems to reduce health disparities.


Subject(s)
Health Equity , Aged , Humans , United States , Medicare , Delivery of Health Care , Quality Improvement , Health Facilities
10.
Lancet Digit Health ; 5(10): e737-e748, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37775190

ABSTRACT

The importance of big health data is recognised worldwide. Most UK National Health Service (NHS) care interactions are recorded in electronic health records, resulting in an unmatched potential for population-level datasets. However, policy reviews have highlighted challenges from a complex data-sharing landscape relating to transparency, privacy, and analysis capabilities. In response, we used public information sources to map all electronic patient data flows across England, from providers to more than 460 subsequent academic, commercial, and public data consumers. Although NHS data support a global research ecosystem, we found that multistage data flow chains limit transparency and risk public trust, most data interactions do not fulfil recommended best practices for safe data access, and existing infrastructure produces aggregation of duplicate data assets, thus limiting diversity of data and added value to end users. We provide recommendations to support data infrastructure transformation and have produced a website (https://DataInsights.uk) to promote transparency and showcase NHS data assets.

11.
Crit Care Clin ; 39(4): 795-813, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37704341

ABSTRACT

Critical care data contain information about the most physiologically fragile patients in the hospital, who require a significant level of monitoring. However, medical devices used for patient monitoring suffer from measurement biases that have been largely underreported. This article explores sources of bias in commonly used clinical devices, including pulse oximeters, thermometers, and sphygmomanometers. Further, it provides a framework for mitigating these biases and key principles to achieve more equitable health care delivery.


Subject(s)
Critical Care , Humans , Bias
13.
PLOS Glob Public Health ; 3(8): e0002252, 2023.
Article in English | MEDLINE | ID: mdl-37578942

ABSTRACT

Current methods to evaluate a journal's impact rely on the downstream citation mapping used to generate the Impact Factor. This approach is a fragile metric prone to being skewed by outlier values and does not speak to a researcher's contribution to furthering health outcomes for all populations. Therefore, we propose the implementation of a Diversity Factor to fulfill this need and supplement the current metrics. It is composed of four key elements: dataset properties, author country, author gender and departmental affiliation. Due to the significance of each individual element, they should be assessed independently of each other as opposed to being combined into a simplified score to be optimized. Herein, we discuss the necessity of such metrics, provide a framework to build upon, evaluate the current landscape through the lens of each key element and publish the findings on a freely available website that enables further evaluation. The OpenAlex database was used to extract the metadata of all papers published from 2000 until August 2022, and natural language processing was used to identify individual elements. Features were then displayed individually on a static dashboard developed using TableauPublic, which is available at www.equitablescience.com. In total, 130,721 papers were identified from 7,462 journals where significant underrepresentation of LMIC and female authors was demonstrated. These findings are pervasive and show no positive correlation with the journal's Impact Factor. The systematic collection of the Diversity Factor concept would allow for more detailed analysis, highlight gaps in knowledge, and reflect confidence in the translation of related research. Conversion of this metric to an active pipeline would account for the fact that how we define those most at risk will change over time and quantify responses to particular initiatives. Therefore, continuous measurement of outcomes, across both patient groups and the researchers investigating those outcomes, will remain important. Moving forward, we encourage further revision and improvement by diverse author groups in order to better refine this concept.
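Reporting each Diversity Factor element separately, rather than collapsing them into one optimizable score, can be sketched as below. The data layout and field names are illustrative assumptions, not the authors' pipeline; only two of the four proposed elements are shown.

```python
def diversity_elements(papers):
    """Report Diversity Factor elements separately for a set of papers,
    rather than combining them into a single score (as the proposal
    argues against). Each paper is a dict of author-level counts;
    field names are illustrative.
    """
    n_authors = sum(p["n_authors"] for p in papers)
    return {
        "share_lmic_authors": sum(p["n_lmic"] for p in papers) / n_authors,
        "share_women_authors": sum(p["n_women"] for p in papers) / n_authors,
    }

papers = [
    {"n_authors": 4, "n_lmic": 0, "n_women": 1},
    {"n_authors": 6, "n_lmic": 2, "n_women": 3},
]
print(diversity_elements(papers))
```

Keeping the elements separate preserves the information a combined score would destroy: a journal could score well on gender representation while still publishing almost no LMIC-affiliated work, and that distinction should remain visible.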

14.
BMJ Health Care Inform ; 30(1)2023 Jun.
Article in English | MEDLINE | ID: mdl-37344002

ABSTRACT

Introduction: In January, the National Institutes of Health (NIH) implemented a Data Management and Sharing Policy aiming to leverage data collected during NIH-funded research. The COVID-19 pandemic illustrated that this practice is equally vital for augmenting patient research. In addition, data sharing acts as a necessary safeguard against the introduction of analytical biases. While the pandemic provided an opportunity to curtail critical research issues such as reproducibility and validity through data sharing, this did not materialise in practice and became an example of 'Open Data in Appearance Only' (ODIAO). Here, we define ODIAO as the intent of data sharing without the occurrence of actual data sharing (eg, material or digital data transfers). Objective: Propose a framework that states the main risks associated with data sharing, systematically present risk mitigation strategies and provide examples through a healthcare lens. Methods: This framework was informed by critical aspects of both the Open Data Institute and the NIH's 2023 Data Management and Sharing Policy plan guidelines. Results: Through our examination of legal, technical, reputational and commercial categories, we find barriers to data sharing ranging from misinterpretation of the General Data Protection Regulation to lack of technical personnel able to execute large data transfers. From this, we deduce that at numerous touchpoints, data sharing is presently too disincentivised to become the norm. Conclusion: In order to move towards Open Data, we propose the creation of mechanisms for incentivisation, beginning with recentring data sharing on patient benefits, additional clauses in grant requirements and committees to encourage adherence to data reporting practices.


Subject(s)
COVID-19 , Humans , United States , Pandemics , Reproducibility of Results , National Institutes of Health (U.S.) , Information Dissemination/methods
15.
PLOS Digit Health ; 2(4): e0000224, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37036866

ABSTRACT

The ability of artificial intelligence to perpetuate bias at scale is increasingly recognized. Recently, proposals for implementing regulation that safeguards such discrimination have come under pressure due to the potential of such restrictions stifling innovation within the field. In this formal comment, we highlight the potential dangers of such views and explore key examples that define this relationship between health equity and innovation. We propose that health equity is a vital component of healthcare and should not be compromised to expedite the advancement of results for the few at the expense of vulnerable populations. A data-centered future that works for all will require funding bodies to incentivize equity-focused AI, and organizations must be held accountable for the differential impact of such algorithms post-deployment.

18.
Br J Anaesth ; 128(2): 343-351, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34772497

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has the potential to personalise mechanical ventilation strategies for patients with respiratory failure. However, current methodological deficiencies could limit clinical impact. We identified common limitations and propose potential solutions to facilitate translation of AI to mechanical ventilation of patients. METHODS: A systematic review was conducted in MEDLINE, Embase, and PubMed Central to February 2021. Studies investigating the application of AI to patients undergoing mechanical ventilation were included. Algorithm design and adherence to reporting standards were assessed with a rubric combining published guidelines, including the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. Risk of bias was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST), and authors were contacted to assess data and code availability. RESULTS: Our search identified 1,342 studies, of which 95 were included: 84 had a single-centre, retrospective study design, with only one randomised controlled trial. Access to datasets and code was severely limited (unavailable in 85% and 87% of studies, respectively). On request, data and code were made available by 12 and 10 authors, respectively, from a list of 54 studies published in the last 5 yr. Ethnicity was frequently under-reported (18/95, 19%), as was model calibration (17/95, 18%). The risk of bias was high in 89% (85/95) of the studies, especially because of analysis bias. CONCLUSIONS: Development of algorithms should involve prospective and external validation, with greater code and data availability to improve confidence in and translation of this promising approach. TRIAL REGISTRATION NUMBER: PROSPERO - CRD42021225918.


Subject(s)
Artificial Intelligence , Respiration, Artificial/methods , Respiratory Insufficiency/therapy , Algorithms , Bias , Humans , Models, Theoretical , Randomized Controlled Trials as Topic , Research Design , Research Report/standards